Training Neural Networks
01. Instructor
02. Training Optimization
03. Testing
04. Overfitting and Underfitting
05. Early Stopping
06. Regularization
07. Regularization 2
08. Dropout
09. Local Minima
10. Random Restart
11. Vanishing Gradient
12. Other Activation Functions
13. Batch vs Stochastic Gradient Descent
14. Learning Rate Decay
15. Momentum
16. Error Functions Around the World
06. Regularization
[Video: DL 53 Q Regularization]
QUIZ: Which model gives a smaller error?

x1 + x2
10x1 + 10x2

SOLUTION:
10x1 + 10x2

Multiplying the weights by 10 does not move the decision boundary, but it pushes the sigmoid outputs closer to 0 and 1, so the predictions become more confident and the error on correctly classified points shrinks. That over-confidence is exactly what leads to overfitting, and it is why regularization penalizes large weights.
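The comparison can be checked numerically. This is a minimal sketch, assuming the classic setup from this lesson: two correctly classified points, (1, 1) labeled 1 and (-1, -1) labeled 0, a sigmoid output, and cross-entropy error. The `total_error` helper and its optional L2 penalty term are illustrative, not from the source.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def cross_entropy(y, y_hat):
    # Binary cross-entropy for a single point.
    return -(y * math.log(y_hat) + (1 - y) * math.log(1 - y_hat))

# Assumed training set: two correctly classified points.
points = [((1, 1), 1), ((-1, -1), 0)]

def total_error(w1, w2, lam=0.0):
    # Cross-entropy error plus an optional L2 penalty lam * (w1^2 + w2^2).
    ce = sum(cross_entropy(y, sigmoid(w1 * x1 + w2 * x2))
             for (x1, x2), y in points)
    return ce + lam * (w1 ** 2 + w2 ** 2)

print(total_error(1, 1))             # model x1 + x2
print(total_error(10, 10))           # model 10x1 + 10x2: much smaller error
print(total_error(10, 10, lam=0.1))  # L2 penalty makes the large weights costly
```

Without regularization, the scaled model wins by orders of magnitude; once the L2 term is added, the large weights are penalized and the modest model becomes the better choice.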